Human-Centric Machine Learning

NeurIPS 2019 Workshop, Vancouver

Friday, 13 December 2019 -- West Level 2, Room 223-224

Overview

Machine learning (ML) tools are increasingly employed to inform and automate consequential decisions for humans, in areas such as criminal justice, medicine, employment, welfare programs, and beyond. ML has already demonstrated its potential not only to improve the accuracy and cost-efficiency of such decisions but also to minimize the impact of certain human biases and prejudices. The technology, however, comes with significant challenges, risks, and potential harms. Examples include (but are not limited to) exacerbating discrimination against historically disadvantaged social groups, threatening democracy, and violating people's privacy. This workshop aims to bring together experts from a diverse set of backgrounds (ML, human-computer interaction, psychology, sociology, ethics, law, and beyond) to better understand the risks and burdens that big data technologies place on society, and to identify approaches and best practices that maximize the societal benefits of machine learning.

The workshop takes a broad perspective on human-centric ML and addresses a wide range of challenges from diverse, multi-disciplinary viewpoints. We strongly believe that for society to trust and accept ML technology, we need to ensure the interpretability and fairness of data-driven decisions. We must have reliable mechanisms to guarantee the privacy and security of people's data. We should demand transparency, not just in the disclosure of algorithms, but also in how they are used and for what purposes. And last but not least, we need a modern legal framework that provides accountability and allows subjects to dispute and overturn algorithmic decisions when warranted. The workshop particularly encourages papers that take a multi-disciplinary approach to these challenges.

One of the main goals of this workshop is to help the community understand where it stands after a few years of rapid development and identify promising research directions to pursue in the years to come. We, therefore, encourage authors to think carefully about the practical implications of their work, identify directions for future work, and discuss the challenges ahead.

This workshop is part of the ELLIS “Human-centric Machine Learning” program.

Call for papers and important dates

Topics of interest include but are not limited to:

  • Fairness: algorithmic fairness, human perceptions of fairness, cultural dependencies
  • Transparency & Interpretability: interpretable algorithms, explanations of ML systems, human usability of explanation methods
  • Privacy: relationships between fairness, security, and privacy, alignment between mathematical privacy and people’s perception of privacy
  • Accountability & Governance: existing legal frameworks, compliance of state-of-the-art fairness and interpretability techniques with regulations such as the GDPR, governance examples for human-centric ML decision-making

We accept submissions in the form of extended abstracts. Submissions must adhere to the NeurIPS format and be limited to 4 pages, including figures and tables. We allow an unlimited number of pages for references and supplementary material, but reviewers are not required to review the supplementary material.

We accept new submissions, submissions currently under review at another venue, and papers that have been accepted earlier this year at an indexed journal or conference. Such recently accepted papers must still adhere to the above formatting instructions; in particular, they are also limited to 4 pages (not including references and supplementary material). All papers must be anonymized for double-blind reviewing, as described in the submission instructions, and submitted via EasyChair.

The workshop will not have formal proceedings, but accepted papers will be posted on the workshop website. We emphasize that the workshop is non-archival, so authors can later publish their work in archival venues. Accepted papers will be presented either as a talk or as a poster (to be determined by the workshop organizers).


Submission deadline: 15 Sep 2019, 23:59 Anywhere on Earth (AoE)

Author notification: 30 Sep 2019, 23:59 Anywhere on Earth (AoE)

Camera-ready deadline: 30 Oct 2019, 23:59 Anywhere on Earth (AoE) -- Please use this style file for the camera-ready version.

Invited Speakers

Schedule

08:30 - 08:45 Welcome and introduction

08:45 - 09:15 Krishna Gummadi (invited talk)

09:15 - 10:00 Contributed talks: Fairness and predictions

• "Learning Representations by Humans, for Humans." Sophie Hilgard, Nir Rosenfeld, Mahzarin Banaji, Jack Cao and David Parkes

• "On the Multiplicity of Predictions in Classification." Charles Marx, Flavio Calmon and Berk Ustun.

• "On the Fairness of Time-Critical Influence Maximization in Social Networks." Junaid Ali, Mahmoudreza Babaei, Abhijnan Chakraborty, Baharan Mirzasoleiman, Krishna P. Gummadi and Adish Singla

10:00 - 10:30 Panel discussion: On the role of industry, academia, and government in developing HCML

10:30 - 11:00 Coffee break

11:00 - 11:30 Deirdre Mulligan (invited talk)

11:30 - 12:00 Contributed talks: Law and Philosophy

  • "On the Legal Compatibility of Fairness Definitions." Alice Xiang and Inioluwa Raji
  • "Methodological Blind Spots in Machine Learning Fairness: Lessons from the Philosophy of Science and Computer Science." Samuel Deng and Achille Varzi

12:00 - 13:30 Lunch and poster session

13:30 - 14:00 Aaron Roth (invited talk)

14:00 - 15:00 Contributed talks: Interpretability

  • "Interpretable and Differentially Private Predictions." Frederik Harder, Matthias Bauer and Mijung Park
  • "Benchmarking Attribution Methods with Ground Truth." Mengjiao Yang and Been Kim
  • "A study of data and label shift in the LIME framework." Amir Hossein Akhavan Rahnama and Henrik Boström
  • "bLIMEy: Surrogate Prediction Explanations Beyond LIME." Kacper Sokol, Alexander Hepburn, Raul Santos-Rodriguez and Peter Flach

15:00 - 15:30 Coffee break

15:30 - 16:00 Finale Doshi-Velez (invited talk)

16:00 - 16:30 Been Kim (invited talk)

16:30 - 17:00 Panel discussion: Future research directions and interdisciplinary collaborations in HCML

17:00 - 18:00 Poster session

18:00 - 18:15 Closing remarks

Accepted Papers

Links to camera-ready versions will be added after 30 October 2019.

Sponsors

Program Committee

  • Alison Smith-Renner
  • Amir Karimi
  • Ana Freire
  • Berk Ustun
  • Bernhard Kainz
  • Borja Balle
  • Chaofan Chen
  • Diego Fioravanti
  • Emilia Gomez
  • Emmanuel Letouzé
  • Eric Wong
  • Francesca Toni
  • Francesco Fabbri
  • Gagan Bansal
  • Gal Yona
  • Gonzalo Ramos
  • John Shawe-Taylor
  • Julius Adebayo
  • Krishnamurthy Dvijotham
  • Lydia Liu
  • Mark Riedl
  • Matthew Kusner
  • Melanie F. Pradier
  • Michael Hind
  • Mijung Park
  • Min Wu
  • Nina Grgic-Hlaca
  • Novi Quadrianto
  • Nozha Boujemaa
  • Ricardo Silva
  • Rohin Shah
  • Sarah Tan
  • Sergio Escalera
  • Vaishak Belle
  • Vlad Estivill-Castro
  • Xiaowei Gu
  • Xiaowei Huang

Organizers

Related workshops @ NeurIPS 2019